
Conversation

@chenjy2003
Contributor

What does this PR do?

This PR adds tiling support for DC-AE (Deep Compression Autoencoder for Efficient High-Resolution Diffusion Models) to the diffusers library, in order to reduce memory consumption when encoding and decoding very-high-resolution images such as 4096x4096.

Could you please have a look? Thanks! @sayakpaul @yiyixuxu
Cc: @lawrence-cj

@sayakpaul sayakpaul requested a review from a-r-r-o-w January 9, 2025 13:37
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@elismasilva
Contributor

elismasilva commented Jan 9, 2025

After this PR is merged, it will be necessary to implement StableDiffusionMixin inheritance in the SANA pipelines in order to enable tiling and slicing.

@chenjy2003 I think you can implement this; see this class signature with StableDiffusionMixin:

class SanaPipeline(DiffusionPipeline, StableDiffusionMixin, SanaLoraLoaderMixin):
    r"""
    Pipeline for text-to-image generation using [Sana](https://huggingface.co/papers/2410.10629).
    """

I've tested this locally with your changes and it's working:

pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

Contributor

@a-r-r-o-w a-r-r-o-w left a comment


Thanks @lawrence-cj! The changes look good.

As mentioned by @elismasilva, we need to add some methods to the pipeline. Let's not derive from StableDiffusionMixin, however, since it contains enable_freeu and disable_freeu.

I would just copy this into the pipeline:

    def enable_vae_slicing(self):
        r"""
        Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
        compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
        """
        self.vae.enable_slicing()

    def disable_vae_slicing(self):
        r"""
        Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
        computing decoding in one step.
        """
        self.vae.disable_slicing()

    def enable_vae_tiling(self):
        r"""
        Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
        compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
        processing larger images.
        """
        self.vae.enable_tiling()

    def disable_vae_tiling(self):
        r"""
        Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
        computing decoding in one step.
        """
        self.vae.disable_tiling()
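The docstrings above describe splitting the input into overlapping tiles and blending the results. As an illustrative sketch only (not the actual diffusers implementation), here is the core idea in one dimension with numpy; the `tiled_apply` helper and its linear crossfade weights are hypothetical:

```python
import numpy as np

def tiled_apply(x, fn, tile=64, overlap=16):
    """Apply `fn` to overlapping horizontal tiles of a (H, W) array and
    crossfade the overlaps back together. Illustrative only; the real
    AutoencoderDC tiling blends 2D tiles of latents/samples."""
    stride = tile - overlap
    out = np.zeros(x.shape, dtype=float)
    weight = np.zeros(x.shape, dtype=float)
    for start in range(0, x.shape[1], stride):
        end = min(start + tile, x.shape[1])
        piece = fn(x[:, start:end])
        w = np.ones(end - start)
        ramp = np.linspace(0.0, 1.0, overlap)
        if start > 0:
            w[:overlap] = ramp            # fade in over the left overlap
        if end < x.shape[1]:
            w[-overlap:] = ramp[::-1]     # fade out over the right overlap
        out[:, start:end] += piece * w
        weight[:, start:end] += w
        if end == x.shape[1]:
            break
    return out / weight                   # normalize by accumulated weights

# With an identity "decoder", tiling plus blending reconstructs the input
# exactly; with a real decoder, the crossfade hides the tile seams.
x = np.random.rand(32, 200)
assert np.allclose(tiled_apply(x, lambda t: t), x)
```

Each tile is processed independently, so peak memory scales with the tile size rather than with the full image, which is what makes 4096x4096 encoding/decoding feasible.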

@FurkanGozukara

Dear @lawrence-cj please add this feature to your official SANA pipeline

@chenjy2003
Contributor Author

@elismasilva @a-r-r-o-w I've added the four methods enable_vae_slicing, disable_vae_slicing, enable_vae_tiling, and disable_vae_tiling to SanaPipeline.

@elismasilva
Contributor

@elismasilva @a-r-r-o-w I've added the four methods enable_vae_slicing, disable_vae_slicing, enable_vae_tiling, and disable_vae_tiling to SanaPipeline.

Ok, I think you forgot to do the same for this pipeline:
https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sana.py

@chenjy2003
Contributor Author

@elismasilva @a-r-r-o-w I've added the four methods enable_vae_slicing, disable_vae_slicing, enable_vae_tiling, and disable_vae_tiling to SanaPipeline.

ok i think you forgot to do the same for this pipeline https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sana.py

Thanks for pointing that out! @lawrence-cj Could you please double-check whether any other SANA pipelines need to be modified?

@lawrence-cj
Contributor

We have two pipelines in diffusers. I'm ok with it! Thanks @chenjy2003 for the hard work!

@nitinmukesh

nitinmukesh commented Jan 10, 2025

Subscribing for it to be merged
NVlabs/Sana#138

@FurkanGozukara

What is the difference between pipeline_pag_sana.py and pipeline_sana.py?

@chenjy2003
Contributor Author

What is the difference between pipeline_pag_sana.py and pipeline_sana.py?

@lawrence-cj Here is a question for you.

@nitinmukesh

nitinmukesh commented Jan 10, 2025

@FurkanGozukara @chenjy2003

It was answered here
NVlabs/Sana#107 (comment)

@FurkanGozukara

@FurkanGozukara @chenjy2003

It was answered here NVlabs/Sana#107 (comment)

Which one is recommended? It doesn't tell much.

@nitinmukesh

@FurkanGozukara @chenjy2003

Which one is recommended? It doesn't tell much.

Based on my test results, PAG quality is 60-70% lower. The question remains why the PAG pipeline was created, and in which specific scenarios it would be useful.

@FurkanGozukara

@FurkanGozukara @chenjy2003

Which one is recommended? It doesn't tell much.

Based on my test results, PAG quality is 60-70% lower. The question remains why the PAG pipeline was created, and in which specific scenarios it would be useful.

thanks a lot

@lawrence-cj
Contributor

Based on my test results, PAG quality is 60-70% lower. The question remains why the PAG pipeline was created, and in which specific scenarios it would be useful.

PAG is a function. Anyone can test and try to improve it. We just did the basic work for the community.

@nitinmukesh

nitinmukesh commented Jan 10, 2025

PAG is a function. Anyone can test and try to improve it. We just did the basic work for the community.

Makes sense. So it's primarily for developers to improve or enhance it.

@bghira
Contributor

bghira commented Jan 10, 2025

this guy Furkan should just be blocked to be honest, creates too much noise and demands a lot from others without thanks

@lawrence-cj
Contributor

Makes sense. So it's primarily for developers to improve or enhance it.

Correct.

Contributor

@a-r-r-o-w a-r-r-o-w left a comment


Thanks! Will merge once tests pass

@a-r-r-o-w
Contributor

Verified that it works as expected:

import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_2Kpx_BF16_diffusers",
    variant="bf16",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")
pipe.enable_vae_tiling()

prompt = 'Self-portrait oil painting, a beautiful cyborg with golden hair, 8k'
image = pipe(
    prompt=prompt,
    height=2048,
    width=2048,
    guidance_scale=5.0,
    num_inference_steps=20,
    generator=torch.Generator().manual_seed(42),
)[0]
image[0].save("output.png")

(output image)

@a-r-r-o-w a-r-r-o-w merged commit e7db062 into huggingface:main Jan 11, 2025
12 checks passed
@geronimi73
Contributor

Thank you all! Works beautifully.
Something worth mentioning in the docs or model card: the AE's default tile_sample_min_height of 512 creates artifacts on large homogeneous areas, like a blue sky, when generating 4K images. Increasing it to 1024 fixed it for the images I looked at, and it still runs without OOM on a 3090/4090.
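A rough way to see why larger tiles help: fewer tiles per side means fewer blended seams crossing homogeneous regions. A back-of-the-envelope sketch (the `tiles_per_side` helper and the 64px overlap are assumptions for illustration; the real stride comes from the autoencoder config):

```python
import math

def tiles_per_side(image_px, tile_px, overlap_px=64):
    """Tiles needed to cover one image side when consecutive tiles
    advance by (tile_px - overlap_px) pixels."""
    stride = tile_px - overlap_px
    return max(1, math.ceil((image_px - tile_px) / stride) + 1)

# On a 4096px side: the reported default 512px tile vs. the suggested 1024px.
assert tiles_per_side(4096, 512) == 9   # 9 tiles, 8 seams per side
assert tiles_per_side(4096, 1024) == 5  # 5 tiles, 4 seams per side
```

The trade-off is memory: a larger tile means each decode step touches more pixels at once, which is why "1024 still fits on a 24GB card" is the useful data point here.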

@FurkanGozukara

tile_sample_min_height

can you share your code how do you generate?

@geronimi73
Contributor

can you share your code how do you generate?

notebook @FurkanGozukara, if you ever get famous with your genius YouTube videos, make sure to mention me in your Nobel Prize speech

@FurkanGozukara

FurkanGozukara commented Jan 11, 2025

can you share your code how do you generate?

notebook @FurkanGozukara, if you ever get famous with your genius YouTube videos, make sure to mention me in your Nobel Prize speech

Thanks a lot, sure, if I get famous haha :D

I see you don't use slicing; any particular reason?

@bghira
Contributor

bghira commented Jan 11, 2025

can you take the unrelated discussion elsewhere?

@lawrence-cj
Contributor

lawrence-cj commented Jan 12, 2025

Thank you all! Works beautifully. Something worth mentioning in the docs or model card: the AE's default tile_sample_min_height of 512 creates artifacts on large homogeneous areas, like a blue sky, when generating 4K images. Increasing it to 1024 fixed it for the images I looked at, and it still runs without OOM on a 3090/4090.

How much GPU memory is needed for 3090/4090? I tested on A100, requiring 22GB. @geronimi73

@geronimi73
Contributor

How much GPU memory is needed for 3090/4090? I tested on A100, requiring 22GB.

What I meant was that generating 4096x4096px images with tile_sample_min_height=1024 runs on a GPU with 24GB VRAM, like an NVIDIA GeForce RTX 3090 or 4090. Sorry for the confusion.

@lawrence-cj
Contributor

Oh, never mind. Thanks, man @geronimi73

@FurkanGozukara

I updated my Gradio app.

The 4K model now works with as little as 8GB of GPU VRAM.

The diffusers pipeline has amazing improvements.

With everything enabled, 1K (1024x1024) generation runs on 4GB VRAM GPUs.

@elismasilva
Contributor

How much GPU memory is needed for 3090/4090? I tested on A100, requiring 22GB.

What I meant was that generating 4096x4096px images with tile_sample_min_height=1024 runs on a GPU with 24GB VRAM, like an NVIDIA GeForce RTX 3090 or 4090. Sorry for the confusion.

My 3060 Ti 8GB runs great if you enable CPU offload, VAE slicing, and VAE tiling.
